Index
Digital Video Page
- A/D, D/A Conversion for HDTV - A/D and D/A conversion of video and graphics signals in everyday consumer devices is becoming more widespread Rate this link
- alt.video.digital-tv FAQ Rate this link
- Basic principles of digital video - chapter 1 from Rate this link
- Demystifying Digital Formats: So Many Formats, So Much Data - There are many systems and formats for transporting digital information from one location to another Rate this link
- Fields: Why Video Is Crucially Different from Graphics - The major video signals used in the world today are field-based, not frame based. Whenever you deal with video, it is absolutely crucial that you understand a few basic facts about fields. Correctly dealing with fields in software is tricky; it is fundamentally different than dealing with plain ol' graphics images. This document explains many of these basic concepts. Rate this link
- Integrated active video filters in multimedia - The increased demand for consumer multimedia systems challenges system designers to provide cost-effective solutions to capitalise on the growth potential in graphics display technologies. This paper will discuss the specific benefits of using an integrated solution for analogue video filtering. Rate this link
- It's Video... It's PC Graphics... No, It's Digital TV - Know Your Video Format to Select the Right ADC - PC and TV applications are converging, requiring one box (set-top box, TV set) to process signals that were originally used in different environments Rate this link
- Square and Non-Square Pixels - Pixels in the graphics world are square. Pixels in an ITU-R BT.601-4 digital video signal (also known as Rec. 601 and formerly CCIR 601) are non-square. The term which describes this difference is pixel aspect ratio. The pixel aspect ratio for square pixels is 1/1. Rate this link
- Take the rough edges out of video-filter design - Incorrectly processed image-frequency information can distort displays generated from digital-video sources. Oversampling and well-implemented video-DAC-output filters can save the day, but improperly designed filters can make matters worse. Before you design your next digital-video system, take some time to investigate video-reconstruction-filter design and trade-offs in oversampling. Rate this link
- Technology: HDTV - many articles Rate this link
- Understanding 4:1:1 and 4:2:2 Sampling Rates - depending on the digital format the video signal will be sampled at either 4:1:1 or 4:2:2 sampling Rate this link
- Video Compression Math - there has been a lot of confusion in marketing literature concerning video compression; most of the confusion stems from a misunderstanding of the math and the terms Rate this link
- Video improvements obviate big bit streams - As digital television continues its frustrating nonemergence, interest in interim technologies that make current video sources look their best is on the rise. These enhancements may obsolete HDTV before it gets off the ground. Rate this link
- YCbCr to RGB Considerations - application note in PDF format Rate this link
General information
Digital video technology is a method of representing a video image signal using binary numbers. Simply stated, digital video is nothing more than a digitized version of the signal used in analog video. An analog video signal is converted to digital with an analog-to-digital (A/D) converter chip, which takes samples of the signal at a fixed time interval (the sampling frequency) and assigns a binary number to each sample. This digital stream is then recorded onto storage media (magnetic tape, optical disc, hard disk or computer memory) or sent over a transmission path (telecommunication network, Internet, digital satellite, digital TV transmission). Upon playback, a digital-to-analog (D/A) converter chip reads the binary data and reconstructs the original analog signal.
This process virtually eliminates generation loss, as every digital-to-digital copy is theoretically an exact duplicate of the original. It allows the video material to be copied multiple times without the degradation in image quality that many analogue systems cause. Digital signals are virtually immune to noise, distortion, crosstalk and other quality problems (if the systems are working properly). In addition, digitally based equipment often offers advantages in cost, features, performance and reliability when compared to analog equipment. Digital systems are not perfect, and specialized hardware/software is used inside equipment to correct all but the most severe data loss.
Because a video signal converted to digital format needs lots of data bandwidth, many applications use some form of (lossy) video data compression to keep the amount of data to be stored and transmitted within reasonable limits. Modern digital video compression systems can reduce the amount of data needed to a very small fraction of the original A/D converter data rate without much degradation in picture quality.
Using computers and communication systems, it is easy to acquire, process, transmit and display photographic-quality still color pictures. The technologies of digital video are necessary to achieve smooth motion and accurate color representation. Digital video technologies are an essential part of multimedia, image communication and the broadcast industry (in both video material production and distribution).
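As a rough illustration of the sampling and quantization steps described above, here is a minimal Python sketch. The 13.5 MHz sampling rate, 8-bit resolution and 720 samples per line are the common ITU-R BT.601 luma figures, used here only as example parameters; the 0.7 V full-scale level and the 1 MHz test tone are arbitrary choices for this example.

```python
import numpy as np

SAMPLE_RATE_HZ = 13.5e6        # ITU-R BT.601 luma sampling rate (example parameter)
BITS_PER_SAMPLE = 8
LEVELS = 2 ** BITS_PER_SAMPLE  # 256 quantization levels

def digitize(analog, full_scale=0.7):
    """A/D step: clip an 'analog' voltage waveform to 0..full_scale volts
    and quantize it to 8-bit integer codes."""
    clipped = np.clip(analog, 0.0, full_scale)
    return np.round(clipped / full_scale * (LEVELS - 1)).astype(np.uint8)

def reconstruct(codes, full_scale=0.7):
    """D/A step: map 8-bit codes back to voltages."""
    return codes.astype(float) / (LEVELS - 1) * full_scale

# One line's worth of samples of a simple 1 MHz test tone.
n = 720                                   # active luma samples per BT.601 line
t = np.arange(n) / SAMPLE_RATE_HZ
analog = 0.35 + 0.35 * np.sin(2 * np.pi * 1e6 * t)
codes = digitize(analog)
print(codes[:8])
print(reconstruct(codes)[:8])
```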
- A Technical Introduction to Digital Video - introduction chapters of the book Rate this link
- CNET Review: Convert VHS to Digital - Most of us have videos of weddings, birthdays, and a plethora of other important personal events. But up until the last couple of years, we've had to capture them on analog tape (VHS, Beta, and so forth) that not only degrades with each playback, but with the passing of time, as well. Want to avoid the "every copy looks a little worse than the original" curse? There's only one solution: go digital. For digital movies that can be played on both your computer and on standalone players, you have two practical media choices: write-once CD-R and DVD-R/+R. Rate this link
- Demystifying Digital Formats - There are many systems and formats for transporting digital information from one location to another. This article explains the fundamentals of digital data transmission and the myriad formats created in the pursuit of a robust yet compact data-handling scheme. Rate this link
- Matrox Camera Interface Guide - interfacing cameras to video digitizing cards from Matrox Rate this link
- The Book: An Engineer's Guide to the Digital Transition - how to move to digital video systems, full book on-line Rate this link
- The book II: More Engineering Guidance for the Digital Transition - how to move to digital video systems, full book on-line Rate this link
- The Digital Fact Book - on-line book on digital video technology Rate this link
- The Video Engineer's Guide to Digital Audio - This guide is intended for the television engineer faced with the problem of integrating digital audio systems in a television environment. Rate this link
- Video Demystified - introduction to a very good video book Rate this link
Digital video guides and books
- Color space FAQ Rate this link
- Color space - From Wikipedia, the free encyclopedia Rate this link
- The Color Space Conversions Applet Rate this link
- ATSC Standards - digital television standards for USA Rate this link
- A Tutorial on Magic Numbers for High Definition Electronic Production - also info on magic numbers in NTSC and PAL systems Rate this link
- DISTRIBUTION of digital television signals Rate this link
- Image resizing and enhanced digital video compression - image resizing and image compression are playing an important role in improving the quality of video images Rate this link
- Software-based PAL colour decoding - This page explains the issues concerning the decoding of colour from broadcast-standard television pictures, and presents software algorithms and a Windows-based application capable of colourising PAL-encoded still-frame images with very high quality. This is an interesting project in practical image-processing! Rate this link
- The J300 Family of Video and Audio Adapters: Architecture and Hardware Design - The J300 family of video and audio adapters provides a feature-rich set of hardware options for Alpha-based workstations. Architecture and design of J300 adapters exploit fast system and I/O buses to allow video data to be treated like any other data type used by the system, independent of the graphics subsystem. This paper describes the architecture used in J300 products, the video and audio features supported, and some key aspects of the hardware design. In particular, the paper describes a simple yet versatile color-map-friendly rendering system that generates high-quality 8-bit image data. Paper describes also some video filters to sharpen and to smooth video picture. Rate this link
- Video quality: a hands-on view - Deinterlacers all may look the same on paper, but engineering intuition suggests that they probably don't produce identical results with real-life video material. EDN takes a look at the contenders and the pretenders. Rate this link
- Video Sampling - some details of most common sampling schemes in use today Rate this link
- Color Spaces - A color space is a model for representing color in terms of intensity values; a color space specifies how color information is represented. It defines a one-, two-, three-, or four-dimensional space whose dimensions, or components, represent intensity values. Rate this link
Technical papers
- Digital Television: The Site - professional business-to-business information about digital television as well as discussions about different aspects of digital television Rate this link
- Digital Video Links - very large link collection Rate this link
- The Pilot European Image Processing Archive Rate this link
Digital video sites
- Data Compression - introductory chapter from a book Rate this link
- EDN hands-on story: Video characterization creates hands-on headaches, part one - How well do video codecs decrease bit rates and still preserve quality? And can you ascertain their credentials without recruiting an army of test subjects? The answer: a qualified "yes." Rate this link
- EDN hands-on story: Video characterization creates hands-on headaches, part two - How well do video codecs decrease bit rates and still preserve quality? And can you ascertain their credentials without recruiting an army of test subjects? The answer: a qualified "yes." Rate this link
- Video Coder Threads Media Needle - Video coding schemes that minimize the amount of data that needs to be sent down the pipe are continually improving. While MPEG-4 holds sway now, work is well under way on a new ITU/ISO standard, H.26L, that could very well reset the media bandwidth requirement. Rate this link
- A Guide to MPEG Fundamentals and Protocol Analysis Rate this link
- MPEG.org - MPEG.ORG is a roadmap to the best MPEG resources on the Internet. If you are interested in the MPEG technology, you sure found the right place! Rate this link
- MPEG Pointers and Resources - from the Reference Website for MPEG! Rate this link
- Divx.com - DivX compression technology is a software application that compresses digital video so it can be downloaded over DSL or cable modems in a relatively short time with little loss of visual quality. DivX is the brand name of a video compression technology created by DivX Networks, Inc., (also known as Project Mayo). The DivX codec (short for compressor/decompressor) is based on the MPEG-4 compression standard. Rate this link
- Everything you wanted to know about MPEG-4, but were afraid to ask Rate this link
- MPEG Industry Forum - Forum for MPEG-4, MPEG-7 and MPEG-21 technologies Rate this link
- Overview of the MPEG-4 Standard Rate this link
- Perfect MPEG-4 Video: Resolution, Data Rate and Picture Quality Rate this link
Video compression
Digital video compression reduces the amount of data needed to represent a digitized television picture (a TV picture encoded as "1s" and "0s"). In video compression, certain redundant details are stripped from each frame of video. This enables more data to be squeezed through a coaxial cable, into a satellite transmission, or onto a compact disc. The signal is then decoded inside a TV set-top box or CD player. In simpler terms, "video compression is like making concentrated orange juice: water is removed from the juice to more easily transport it, and added back later by the consumer." Heavy research and development into digital compression has taken place over the past years because of the enormous advantages that digital technology can bring to the broadcasting, telecommunications, and computer industries. The use of compressed digital video instead of analog video allows for lower video distribution costs, increases the quality and security of video, and allows for interactivity.
Currently, there are a number of compression technologies available, for example MPEG-1, MPEG-2, MPEG-4, Indeo, Cinepak, Motion JPEG, RealVideo, H.261, DV and DivX. Digital compression can take these many forms and be suited to a multitude of applications. Each compression scheme has its strengths and weaknesses, and the codec you choose determines how good the images will look, how smoothly the images will flow and what video data rate is needed for usable picture quality.
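A quick back-of-the-envelope calculation shows why compression is unavoidable. The sketch below assumes ITU-R BT.601 625-line figures (720x576, 25 frames/s, 8-bit 4:2:2) and an example 6 Mbit/s target channel; the figures are illustrative only.

```python
# Uncompressed data rate of ITU-R BT.601 625-line video (720x576, 25 fps, 8-bit 4:2:2).
width, height, fps = 720, 576, 25
bytes_per_pixel = 2            # 4:2:2 averages two bytes per pixel (Y plus shared Cb/Cr)
raw_bps = width * height * bytes_per_pixel * 8 * fps
print(f"raw rate: {raw_bps / 1e6:.0f} Mbit/s")          # about 166 Mbit/s

# Compression ratio needed to fit an example 6 Mbit/s broadcast channel.
target_bps = 6e6
print(f"required ratio: {raw_bps / target_bps:.0f}:1")  # roughly 28:1
```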
General information
MPEG 1 and MPEG 2
MPEG (pronounced M-peg), which stands for Moving Picture Experts Group, is the name of a family of standards used for coding audio-visual information (e.g., movies, video, music) in a digital compressed format. The major advantage of MPEG compared to other video and audio coding formats is that MPEG files are much smaller for the same quality. This is because MPEG uses very sophisticated compression techniques. The MPEG-1 and MPEG-2 standards made interactive video on CD-ROM and digital television possible.
MPEG-1 is the first standard of the family; it was adopted in 1991. The goal of MPEG-1 was to develop an algorithm that could compress a video signal so that it could be played back from a CD-ROM or over telephone lines at a low bit rate (less than 1.2 Mbit/s), at a quality level that could deliver full-motion, full-screen, VHS-quality video from a variety of sources. The MPEG-1 standard is primarily intended to process video at what is known as SIF (Source Input Format) resolution, that is 352x240 pixels at 30 frames per second. This is one-quarter of the resolution of the broadcast television standard called CCIR 601. The MPEG-1 standard also consists of three layers: video, audio, and system. The most common applications for MPEG-1 have been Video CD and computer video on CD-ROMs.
MPEG-2 was created by the ISO committee to improve on MPEG-1, because MPEG-1 did not serve the requirements of the broadcast industry. The group developed a compression algorithm that processed video at full resolution, matching CCIR 601 video (704x480 NTSC, 704x576 PAL). MPEG-2 took advantage of the higher bandwidths available to deliver higher image resolution and picture quality. It targets increased image quality, support of interlaced video formats, and provision for multi-resolution scalability. It allows compression at higher resolutions and higher bit rates than MPEG-1. MPEG-2 typically runs at a data rate of around 6 Mbit/s and is designed for broadcast-quality video that delivers better quality at a higher data rate. Like its predecessor, the MPEG-2 standard consists of three layers: video, audio, and system. The most common applications for MPEG-2 are digital television and DVD.
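To put the MPEG-1 figures above in perspective, here is a minimal calculation, assuming the 4:2:0 chroma subsampling that MPEG-1 uses and 8 bits per sample:

```python
# Raw data rate of MPEG-1 SIF source video (352x240, 30 fps, 8-bit 4:2:0)
# compared with the ~1.2 Mbit/s target quoted above.
width, height, fps = 352, 240, 30
bytes_per_pixel = 1.5          # 4:2:0: one Y byte plus quarter-size Cb and Cr per pixel
raw_bps = width * height * bytes_per_pixel * 8 * fps
print(f"raw SIF rate: {raw_bps / 1e6:.1f} Mbit/s")     # about 30 Mbit/s
print(f"needed compression: {raw_bps / 1.2e6:.0f}:1")  # roughly 25:1
```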
MPEG 3
Along with the development of MPEG-2, work began on an MPEG-3 standard. This standard was directed towards the expected market of High Definition Television (HDTV). MPEG-3 targeted HDTV applications with sampling dimensions up to 1920x1080 at 30 Hz and coded bit rates between 20 and 40 Mbit/s. However, after research it was discovered that the MPEG-2 and MPEG-1 syntax could work well together for HDTV-rate video: with some fine tuning, MPEG-2 was found to be suitable for HDTV as well. MPEG-3 no longer exists, because HDTV became part of the MPEG-2 standard.
MPEG 4
MPEG-4 is an ISO/IEC standard developed by MPEG (Moving Picture Experts Group), the committee that also developed the Emmy Award winning standards known as MPEG-1 and MPEG-2. MPEG-4 work started in 1993. The MPEG-4 Version 1 standard was finalized in October 1998 and became an International Standard in the first months of 1999. The fully backward compatible extensions under the title of MPEG-4 Version 2 were frozen at the end of 1999 and acquired formal International Standard status early in 2000. Some work on extensions in specific domains is still in progress.
Since MPEG-4 adopted an object-based audiovisual representation model with hyperlinking and interaction capabilities, and supports both natural and synthetic content, it is expected that this standard will become the information coding playground for future multimedia applications. MPEG-4 provides better compression and more options for future applications. The MPEG-4 Visual standard allows the hybrid coding of natural (pixel based) images and video together with synthetic (computer generated) scenes. MPEG-4 provides the standardized technological elements enabling the integration of the production, distribution and content access paradigms of three fields: digital television, interactive graphics applications (synthetic content) and interactive multimedia (World Wide Web, distribution of and access to content).
M-JPEG
Motion JPEG (MJPEG, M-JPEG) is a common name used to refer to many different digital video formats which store the video as a series of JPEG-compressed images (video frames or fields). Motion JPEG compression technology uses the limits of our visual perception to discard information we don't use. Motion JPEG treats each frame as a single image to which it applies JPEG compression. In the compression process, each frame is first broken into 8x8 pixel blocks, then the pixel values (brightness and color) are converted to frequencies. This conversion is done using the Discrete Cosine Transform (DCT). At its simplest level we can compress each block of 8x8 pixels by reducing the number of values that are acceptable for the block. Uncompressed we would have 64 values, although many would most likely be similar given that they are in the same area of the picture. Various methods are used to reduce the amount of pixel data that needs to be stored while still giving a "close enough" picture compared to the original one. DCT compression works very well with soft images, images with large expanses of almost flat color, and images that don't contain a lot of detail. DCT compression has more difficulty with fine detail, gradients (blue skies or underwater can look 'quantized'), and noise. In general terms, the practical balance of best picture quality against the storage space taken falls like this:
- VHS: 2-2.5 MB/s (80-100 Kbyte/frame, 11:1-8.5:1 compression ratio)
- SVHS: 3-3.75 MB/s (120-150 Kbyte/frame, 7:1-5:1 compression ratio)
- Betacam SP: 4.5 MB/s (180 Kbyte/frame, 4.8:1 compression ratio)
Compression ratios of around 4:1 or 5:1 are nearly always without visual loss of image quality (unless the material is very hard to compress). Critical applications might use higher data rates. Some non-linear systems require you to use one compression rate for the entire program, while others allow mixed data rates in the same program. For those that require a data rate (compression ratio) to be chosen for the project, knowing your images and how they will compress will help you choose the rate that gives the best image quality while minimizing the hard drive space needed.
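To illustrate the 8x8 DCT step described above, here is a minimal Python sketch using SciPy. The uniform quantizer step size is a made-up value for demonstration only, not an actual JPEG quantization table, and the sample block is synthetic.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    """2-D type-II DCT of an 8x8 block (orthonormal scaling)."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(coeffs):
    """Inverse 2-D DCT."""
    return idct(idct(coeffs, axis=0, norm='ortho'), axis=1, norm='ortho')

# An 8x8 block of luma values: a smooth gradient plus a little noise.
rng = np.random.default_rng(0)
block = np.linspace(100, 140, 64).reshape(8, 8) + rng.normal(0, 2, (8, 8))

coeffs = dct2(block - 128)              # level-shift, then transform
step = 16                               # made-up uniform quantizer step size
quantized = np.round(coeffs / step)     # most high-frequency coefficients become 0
print("non-zero coefficients:", np.count_nonzero(quantized), "of 64")

restored = idct2(quantized * step) + 128
print("max pixel error:", np.abs(restored - block).max())
```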
- JPEG to MPEG conversion howto Rate this link
- MJPEG Tools - The mjpeg programs are a set of tools that can do recording of videos and playback, simple cut-and-paste editing and the MPEG compression of audio and video under Linux. Rate this link
DV
DV is nowadays a popular digital video camera format used for consumer and semi-professional work. MiniDV is the "low-cost" digital video (DV) format targeted at consumer use. The resolution quality of MiniDV and Betacam SP are perceptibly similar (= very good), although MiniDV has some limitations. The DV and MiniDV formats use the IEEE 1394 interface for connecting the camcorder to a computer to transfer the video in digital format. The DV system uses 4:1:1 sampling for the video signal. It has 480 active lines for NTSC and around 500 lines of horizontal resolution. The DV system uses 5:1 DCT-based video compression. Many detractors of the DV format arbitrarily categorize 5:1 compression as "excessive" for broadcast and corporate video production; it is very good for most uses, but is not always free of artifacts. The use of DV codecs has become popular after the introduction of computer-based non-linear editing systems for DV cameras. Those systems take the DV data from the tape as it is to the computer through the IEEE 1394 interface, and then process the data in the computer in the native DV camera format.
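As a rough sanity check of the 5:1 figure, assuming 8-bit 4:1:1 sampling of a 720x480 NTSC frame at about 30 frames per second:

```python
# DV video data rate: 720x480 NTSC frame, 8-bit 4:1:1 sampling, ~29.97 frames/s.
width, height, fps = 720, 480, 30000 / 1001
luma_bytes = width * height                  # one Y sample per pixel
chroma_bytes = 2 * (width // 4) * height     # Cb and Cr at one quarter horizontal rate
raw_bps = (luma_bytes + chroma_bytes) * 8 * fps
print(f"raw 4:1:1 rate: {raw_bps / 1e6:.0f} Mbit/s")             # about 124 Mbit/s
print(f"after 5:1 compression: {raw_bps / 5 / 1e6:.0f} Mbit/s")  # about 25 Mbit/s (DV25)
```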
- Wavelets both implode and explode images - Wavelet video-processing technology offers some alluring features, including high compression ratios and eye-pleasing enlargements. Rate this link
Wavelet video compression
- Stereo/Multiview Image/Video Coding - discussion forum, papers, research groups and test images Rate this link
Other video compression information
- Evaluation of Image Capture Pathways for Multimedia Application Rate this link
- Multimedia Central - The "One-Stop Page" Specifically For Engineers Designing Multimedia ICs or Systems Rate this link
- Tomi Engdahl's multimedia link page Rate this link
- Tomi Engdahl's PC video technology page Rate this link
- Two-Chip Set Safeguards Digital Video Content - a transmitter-receiver pair provides content protection for signals going from a digital video source to a digital monitor Rate this link
Computer based digital video
Digital TV broadcasting
Digital TV broadcasting information is available at the Video Broadcasting Page.
- Demystifying Cables and Connectors for Digital Formats Part 1-BNCs, Coax, and SDI Rate this link
- Demystifying Cables and Connectors for Digital Formats Part 2 - DVI, Firewire, and USB 2.0 Rate this link
- Demystifying Digital Formats: Comparing Digital Formats Rate this link
- Digital Video: Are You 4:2:2 Compliant? Rate this link
- SMPTE 259 Level A: 143 Mbps clock, NTSC 4fsc composite signal, timing/alignment jitter 1.40 ns
- SMPTE 259 Level B: 177 Mbps clock, PAL 4fsc composite signal, timing/alignment jitter 1.13 ns
- SMPTE 259 Level C: 270 Mbps clock, 525/625-line component signal, timing/alignment jitter 0.74 ns
- SMPTE 259 Level D: 360 Mbps clock, 525/625-line component signal, timing/alignment jitter 0.56 ns
- SMPTE 292: 1485 Mbps clock, HDTV signal, timing jitter 0.67 ns, alignment jitter 0.13 ns
- Demystifying Cables and Connectors for Digital Formats Part 1-BNCs, Coax, and SDI Rate this link
- Demystifying Digital Formats SDTV, HDTV and HD-SDI Rate this link
- Getting the Most from SDI Rate this link
- Routing SDI Through Analog Video Distribution Equipment: A Good Idea? - A/V applications for digital video over the serial digital interface, or SDI as it's commonly called, are on the rise. Originally just the realm of television production and broadcast operations, SDI is used more often to convey professional quality component video among systems. Many system designers are aware that the nominal SDI data signal level is just under one volt, 0.800 volt to be exact. So, a question that often arises is: "May I use analog hardware [switcher, router, DA] for SDI?" Rate this link
- 1394 Trade Association Rate this link
- Demystifying Digital Formats: FireWire - THE Digital A/V Interface Rate this link
- EDN hands-on project: Firewire unleashes the power of digital video - IEEE-1394, or Firewire, serial bus promises to enable a number of new applications that rely on rich, high-speed data streams, such as digital video Rate this link
- Fire on the Wire: The IEEE 1394 High Performance Serial Bus - The IEEE 1394-1995 standard for the High Performance Serial Bus, here abbreviated to 1394, defines a serial data transfer protocol and interconnection system that "provides the same services as modern IEEE-standard parallel busses, but at a much lower cost." 1394 incorporates quite advanced technology, but it's the "much lower cost" feature that assures 1394's adoption for the digital video and audio consumer markets of 1997 and beyond. The capabilities of the 1394 bus are sufficient to support a variety of high-end digital audio/video applications, such as consumer audio/video device control and signal routing, home networking, nonlinear DV editing, and 32-channel (or more) digital audio mixing. Rate this link
- IEEE 1394 drives expand video-storage options - Consumer-electronics manufacturers are considering IEEE 1394 disk drives to solve the cost, upgrade, and reliability problems with today's embedded approach to digital-media storage. Rate this link
- To avoid the land mines, ya gotta know the territory - The 1394 high-speed-serial data-transfer standard promises to bridge the gap between PCs and home-entertainment systems. But to reach the bridge, system architects must cross a mine field of conflicting requirements. To avoid being blown up, find out where the mines are buried. Rate this link
Interfaces
General information articles
SDI
SDI stands for Serial Digital Interface. The Serial Digital Interface (SDI, SMPTE 259M) grew out of the need for longer-distance connection of component digital television equipment, the result being the viability of a truly digital broadcast station. SDI is capable of running hundreds of feet, and can run thousands of feet if properly distributed.
To understand SDI you must understand some history of digital video interfaces. The impetus for serial digital coding and transmission of video heightened with the introduction of the first component digital production video tape recorder in the mid-'80s, known as D1 or CCIR 601. Digital component recording began in 1987 with the creation of the D1 format (SMPTE 125M). The D1 interface is an 8/10-bit parallel system intended for close-in connection between digital tape recorders (19 mm tape). Its interface cabling is short due to the difficulty of maintaining proper bit timing over a byte-wide data channel. Reformatting the byte-wide D1 data via a serializer yields a very high-speed serial data stream: serializing a 10-bit data word results in a data rate ten times faster, so the 27 MHz D1 data becomes serial data at 270 megabits per second for standard component video.
Although SDI bit rates are very high, distribution of serial data as a single cable connection presents significant advantages. First, it's much easier (read cheaper) to route and switch one cable than a parallel system of cables. Having all data bits organized as one stream means there will be no issues with clock and data synchronization. Managing bit timing and cable equalization is easier, and the data skew problems encountered with multi-conductor cables do not exist.
The SDI format utilizes a differential signaling technique and NRZI (non-return to zero inverted) coding. Although SDI is transmitted as an unbalanced signal on 75-ohm coax, transmission and reception involve differential amplifiers that format and detect, respectively, both data phases. Utilizing differential reception creates additional headroom and robustness in signal-to-noise performance. SDI is very immune to extraneous noise and low-frequency components (hum), because the receiver takes one phase of the data transmission, inverts it, and then adds it to the in-phase portion. As in a regular analog differential amplifier, common-mode noise induced into the signal is cancelled out during this inversion and addition operation.
SMPTE 259M supports four SDI transmission rates, and SMPTE 292M supports 1.485 Gbps for HD SDI. Currently, most serial digital applications involve standard definition television, and here the serial digital component format is the most often used. Component serial digital (4:2:2 digital component PAL or NTSC) requires 270 megabits per second. The SDI encoding algorithm ensures enough signal transitions to embed the clock within the data and minimize any DC component. SDI coaxial cable drivers AC-couple the serial data into the transmission cable, thus providing DC isolation between source and receiver.
Cable loss is a serious issue at these high data rates. Well-designed receivers, called Class A type, can recover serial digital data as low as -30 dB at one-half the clock rate from a pristine source, or about 25 millivolts. The one-half clock rate frequency is used to calculate SDI cable loss: for 270 Mbps component SDI, that frequency is 135 MHz. Cable loss specifications for standard SDI, SDTI, and uncompressed SDTV are addressed in SMPTE 259M and ITU-R BT.601.
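The 270 Mbit/s figure follows directly from the BT.601 4:2:2 sampling structure and the 10-bit serialization described above; a quick check:

```python
# BT.601 4:2:2 word rate and the resulting serial SDI bit rate.
luma_rate = 13.5e6             # Y samples per second
chroma_rate = 2 * 6.75e6       # Cb + Cr samples per second
word_rate = luma_rate + chroma_rate   # 27 million words per second
bits_per_word = 10             # SDI carries 10-bit words
print(f"serial rate: {word_rate * bits_per_word / 1e6:.0f} Mbit/s")   # 270 Mbit/s
```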
In these standards, the maximum recommended cable length corresponds to 30 dB of loss at one-half the clock frequency. This high serial digital signal loss level is acceptable because of the serial digital receiver: serial digital receivers have special signal recovery processing. SMPTE 259M mentions a typical range of expected SDI receiver sensitivity between 20 dB and 30 dB at one-half the data clock frequency. Like analog signals, SDI data can be corrupted by improper termination or routing that results in cable reflections. If a clean distribution path is maintained, SDI decoding will largely be a function of the decoder sensitivity on the receiving end: assuming that bit transitions are recognizable, the decoder will only be limited by its peak-to-peak sensitivity.
For HD SDI running at 1.5 Gbps, SMPTE 292M governs the cable loss calculations. In that standard, the maximum cable length corresponds to 20 dB of loss at one-half the clock frequency. Due to the data coding scheme, the bit rate is effectively the same as the clock frequency in MHz.
Recall that digital systems do not degrade linearly with cable loss; system performance depends on the cable loss and on the receiver performance. The economy of distributing SDI and HD SDI lies in the ability of the serial digital receiver to recover a low-level signal. In all cases, your system must operate solidly before the "cliff region" where sudden signal dropout occurs. Recommendations among cable manufacturers will certainly vary, but it is good practice to limit your run lengths to no more than 90% of the calculated value. This provides leeway for cable variations, connector loss, patching equipment, etc. The SMPTE 259 levels listed earlier summarize the main SDI signal versions.
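As a sketch of the cable-length budgeting described above (the attenuation figures below are placeholders only; use the loss your cable's datasheet quotes at one-half the clock frequency):

```python
def max_sdi_run_m(loss_db_per_100m, budget_db, derating=0.9):
    """Rough maximum cable run in metres for a serial digital link.

    loss_db_per_100m -- cable attenuation at one-half the clock frequency
    budget_db        -- 30 dB for SD-SDI (SMPTE 259M), 20 dB for HD-SDI (SMPTE 292M)
    derating         -- keep margin, e.g. limit runs to 90% of the calculated length
    """
    return budget_db / loss_db_per_100m * 100 * derating

# Placeholder attenuation figures -- look up your cable's datasheet values.
print(f"SD-SDI (135 MHz, 10 dB/100 m assumed): {max_sdi_run_m(10, 30):.0f} m")    # 270 m
print(f"HD-SDI (742.5 MHz, 25 dB/100 m assumed): {max_sdi_run_m(25, 20):.0f} m")  # 72 m
```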
SDTI
The serial digital transport interface (SDTI) utilizes the SDI data format for the transport of other types of digital data. In particular, it is great for transporting compressed SDTV and HDTV throughout a television plant. Any data capable of fitting within the data transport structure (270 Mbps or 360 Mbps) of the standard may be routed via existing SDI equipment. SDTI is defined by SMPTE 305M.
Firewire / IEEE 1394
IEEE 1394 is a fast (up to 400 Mbit/s) serial bus interface. IEEE 1394 was called Firewire before it was standardized by the IEEE as standard IEEE 1394. Firewire, or IEEE 1394, is that tiny, square-ish connector tucked away on the side of your digital camcorder that allows you to upload DV video to your computer, among other things. IEEE 1394 is nowadays used mainly for interconnecting modern digital video equipment to PCs. For example, practically every DV camera has an IEEE 1394 interface in it, so with an IEEE 1394 interface card and suitable software you can transfer your movies from a DV camera to a PC hard disk for editing.
The DV (Digital Video) recording standard now driving most consumer camcorder purchases is a serial digital format of 25 Mbps, sometimes called DV25. The Firewire (IEEE 1394) interface conveniently handles the data rate of DV, and then some. The DV format is the first application making tremendous use of the IEEE 1394 capability. IEEE 1394 is also designed to become a universal digital interface between digital consumer video equipment such as DV cameras, DVD players and digital flat panel displays.
Devices on the IEEE 1394 bus are hot-swappable, which means that the bus allows live connection/disconnection of devices. The digital interface supports both asynchronous and isochronous data transfers. Addressing is used to reach a particular device on the bus; each device determines its own address. IEEE 1394 supports up to 63 devices at a maximum cable distance between devices of 4.5 meters. However, "powered" Firewire devices and repeaters will repeat a signal and allow you to extend another 15 feet. The maximum number of cable hops on the bus is 16, allowing a total maximum cable distance of 72 meters. The 1394 specification limits cable length to 4.5 meters in order to satisfy the round-trip time maximum required by the arbitration protocol. Some applications may run longer lengths when the data rate is lowered to the 100 Mbps level.
The 1394 system utilizes two shielded twisted pairs and two single wires. The twisted pairs handle differential data and strobe (which assists in clock regeneration), while the separate wires provide power and ground for remote devices needing power support. The signal level is 265 mV differential into 110 ohms. The 1394 specification provides electrical performance requirements, which leave open the actual parameters of the cable design. As with all differential signaling systems, pair-to-pair data skew is critical (less than 0.40 nanoseconds). Crosstalk must be maintained below -26 dB from 1 to 500 MHz. The only requirement on the size of wire used is that the velocity of propagation must not exceed 5.05 ns/meter. The typical cable has 28-gauge copper twisted pairs and 22-gauge wires for power and ground.
A Firewire-connected appliance may or may not need power from its host, but must be capable of providing limited power for downstream devices. The 1394 specification supports two plug configurations: a four-pin version and a six-pin version. Six-pin versions can carry all six connections and are capable of providing power to appliances that need it. For independently powered appliances, like camcorders, the four-pin version is used for its compactness. Cable assemblies have the data signal pairs crossed over to avoid polarity issues. All 1394-type appliances have receptacles, which makes for easy upstream-downstream connection with the male-to-male cable. Newer versions of the standard have increased the available media from the original short "Firewire" cable to other media as well.
Transmitting data over CAT5 cable allows data at 100 Mbps to travel 100 m (specified in IEEE 1394b). Fiber cable will allow 100 meter distances at any speed (the maximum speed depends on the type of fiber cable).
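A small sanity check of the hop and length limits quoted above (a minimal sketch; the numbers come from the text):

```python
# IEEE 1394 copper-cable topology limits quoted above.
max_hops = 16          # maximum cable hops between the two farthest nodes
hop_length_m = 4.5     # maximum length of one standard cable segment
print(f"maximum end-to-end span: {max_hops * hop_length_m:.0f} m")   # 72 m

# IEEE 1394b alternatives extend the per-segment reach:
print("CAT5 at 100 Mbps: up to 100 m per segment")
```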